
    Bayesian Active Edge Evaluation on Expensive Graphs

    Robots operate in environments with varying implicit structure. For instance, a helicopter flying over terrain encounters a very different arrangement of obstacles than a robotic arm manipulating objects on a cluttered tabletop. State-of-the-art motion planning systems do not exploit this structure, thereby expending valuable planning effort searching for implausible solutions. We are interested in planning algorithms that actively infer the underlying structure of the valid configuration space during planning in order to find solutions with minimal effort. Consider the problem of evaluating edges on a graph to quickly discover collision-free paths. Evaluating edges is expensive, both for robots with complex geometries like robot arms and for robots with limited onboard computation like UAVs. Until now, this challenge has been addressed via laziness, i.e., deferring edge evaluation until absolutely necessary, in the hope that edges turn out to be valid. However, not all edges are alike in value: some have many potentially good paths flowing through them, and others encode the likelihood of neighbouring edges being valid. This leads to our key insight: instead of passive laziness, we can actively choose edges that reduce the uncertainty about the validity of paths. We show that this is equivalent to the Bayesian active learning paradigm of decision region determination (DRD). However, the DRD problem is not only combinatorially hard, but also requires explicit enumeration of all possible worlds. We propose a novel framework that combines two DRD algorithms, DIRECT and BISECT, to overcome both issues. We show that our approach outperforms several state-of-the-art algorithms on a spectrum of planning problems for mobile robots, manipulators and autonomous helicopters.
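    The active-evaluation idea can be illustrated with a small sketch: given a set of candidate paths and a prior probability that each edge is collision-free, repeatedly evaluate the unevaluated edge expected to prune the most candidates. This is only a crude surrogate for the DRD objective, not the DIRECT/BISECT framework described above; the function names (select_edge_to_evaluate, collision_check) and the pruning heuristic are assumptions for illustration.

```python
import math

def select_edge_to_evaluate(paths, p_valid, evaluated):
    """Pick the unevaluated edge whose outcome is expected to prune the most
    candidate paths (a crude surrogate for the DRD selection rule)."""
    best_edge, best_score = None, -math.inf
    for edge, p in p_valid.items():
        if edge in evaluated:
            continue
        crossing = sum(1 for path in paths if edge in path)
        if crossing == 0:
            continue
        # If the edge turns out invalid, every path through it is eliminated;
        # weight that pruning by the probability of the invalid outcome.
        score = (1.0 - p) * crossing
        if score > best_score:
            best_edge, best_score = edge, score
    return best_edge

def plan_with_active_evaluation(paths, p_valid, collision_check):
    """Evaluate edges until some candidate path is fully validated."""
    evaluated = {}
    while paths:
        for path in paths:
            if all(evaluated.get(e) for e in path):
                return path                      # collision-free path found
        edge = select_edge_to_evaluate(paths, p_valid, evaluated)
        evaluated[edge] = collision_check(edge)  # the expensive query
        if not evaluated[edge]:
            paths = [p for p in paths if edge not in p]
    return None                                  # all candidates invalidated
```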

    Efficient motion planning for problems lacking optimal substructure

    We consider the motion-planning problem of planning a collision-free path of a robot in the presence of risk zones. The robot is allowed to travel in these zones but is penalized in a super-linear fashion for consecutive, accumulated time spent there. We suggest a natural cost function that balances path length and risk-exposure time. Specifically, we consider the discrete setting where we are given a graph, or a roadmap, and we wish to compute the minimal-cost path under this cost function. Interestingly, paths defined using our cost function do not have an optimal substructure. Namely, subpaths of an optimal path are not necessarily optimal. Thus, the Bellman condition is not satisfied and standard graph-search algorithms such as Dijkstra's algorithm cannot be used. We present a path-finding algorithm, which can be seen as a natural generalization of Dijkstra's algorithm. Our algorithm runs in $O\left((n_B \cdot n) \log(n_B \cdot n) + n_B \cdot m\right)$ time, where $n$ and $m$ are the number of vertices and edges of the graph, respectively, and $n_B$ is the number of intersections between edges and the boundary of the risk zone. We present simulations on robotic platforms demonstrating both the natural paths produced by our cost function and the computational efficiency of our algorithm.
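    The absence of optimal substructure can be made concrete with a sketch: because the cost of the next edge depends on how long the path has already been inside the risk zone, the search state must be augmented with that consecutive exposure rather than being the vertex alone. The code below is an illustrative label-based variant under simplifying assumptions (exposure accrued per edge, a super-linear penalty passed as a callable); it is not the paper's algorithm and does not reproduce its complexity bound.

```python
import heapq
from itertools import count

def risk_aware_shortest_path(graph, source, target, in_risk_zone, penalty):
    """Label-based search over (vertex, consecutive-risk-time) states.
    graph[u] -> iterable of (v, length); in_risk_zone(u, v) flags risky edges;
    penalty(t) is a super-linear penalty on consecutive exposure time t."""
    tie = count()                            # tie-breaker so states are never compared
    frontier = [(0.0, next(tie), source, 0.0)]
    best = {(source, 0.0): 0.0}              # best cost per augmented state
    while frontier:
        cost, _, u, t = heapq.heappop(frontier)
        if u == target:
            return cost
        if best.get((u, t), float("inf")) < cost:
            continue                         # stale queue entry
        for v, length in graph[u]:
            if in_risk_zone(u, v):
                nt = t + length                              # exposure keeps growing
                ncost = cost + length + penalty(nt) - penalty(t)
            else:
                nt = 0.0                                     # exposure streak resets
                ncost = cost + length
            if ncost < best.get((v, nt), float("inf")):
                best[(v, nt)] = ncost
                heapq.heappush(frontier, (ncost, next(tie), v, nt))
    return None
```

    Augmenting the state with the exposure is what restores a Bellman-like property in this sketch: two labels at the same vertex are only comparable if they also share the same consecutive exposure time.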

    Shared Autonomy via Hindsight Optimization

    In shared autonomy, user input and robot autonomy are combined to control a robot to achieve a goal. Often, the robot does not know a priori which goal the user wants to achieve, and must both predict the user's intended goal and assist in achieving it. We formulate the problem of shared autonomy as a Partially Observable Markov Decision Process (POMDP) with uncertainty over the user's goal. We utilize maximum entropy inverse optimal control to estimate a distribution over the user's goal based on the history of inputs. Ideally, the robot assists the user by solving for an action which minimizes the expected cost-to-go for the (unknown) goal. As solving the POMDP to select the optimal action is intractable, we use hindsight optimization to approximate the solution. In a user study, we compare our method to a standard predict-then-blend approach. We find that our method enables users to accomplish tasks more quickly while utilizing less input. However, when asked to rate each system, users were mixed in their assessment, citing a tradeoff between maintaining control authority and accomplishing tasks quickly.
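    The two steps can be sketched compactly under the common QMDP-style reading of hindsight optimization: maintain a distribution over goals from the user's inputs with a MaxEnt-IOC-style likelihood, then pick the action that minimizes expected cost-to-go under that distribution. The names below (goal_posterior, assistive_action, q_value, value) are illustrative assumptions, not the authors' implementation.

```python
import math

def goal_posterior(prior, state, user_input, q_value, value):
    """MaxEnt-IOC-style update: a user's input is exponentially more likely
    the closer it is to optimal for a given goal.
    prior: dict goal -> P(goal); q_value(s, a, g) and value(s, g) are
    cost-to-go estimates with the goal assumed known."""
    scores = {g: p * math.exp(-(q_value(state, user_input, g) - value(state, g)))
              for g, p in prior.items()}
    z = sum(scores.values())
    return {g: s / z for g, s in scores.items()}

def assistive_action(state, actions, goal_probs, q_value):
    """Hindsight-optimization (QMDP-style) choice: act as if the goal
    uncertainty resolves after this step, so the expected cost-to-go is the
    probability-weighted cost under each candidate goal."""
    return min(actions, key=lambda a: sum(p * q_value(state, a, g)
                                          for g, p in goal_probs.items()))
```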

    Batch Informed Trees (BIT*): Sampling-based Optimal Planning via the Heuristically Guided Search of Implicit Random Geometric Graphs

    In this paper, we present Batch Informed Trees (BIT*), a planning algorithm based on unifying graph- and sampling-based planning techniques. By recognizing that a set of samples describes an implicit random geometric graph (RGG), we are able to combine the efficient ordered nature of graph-based techniques, such as A*, with the anytime scalability of sampling-based algorithms, such as Rapidly-exploring Random Trees (RRT). BIT* uses a heuristic to efficiently search a series of increasingly dense implicit RGGs while reusing previous information. It can be viewed as an extension of incremental graph-search techniques, such as Lifelong Planning A* (LPA*), to continuous problem domains, as well as a generalization of existing sampling-based optimal planners. BIT* is shown to be probabilistically complete and asymptotically optimal. We demonstrate the utility of BIT* on simulated random worlds in $\mathbb{R}^2$ and $\mathbb{R}^8$ and on manipulation problems on CMU's HERB, a 14-DOF two-armed robot. On these problems, BIT* finds better solutions faster than RRT, RRT*, Informed RRT*, and Fast Marching Trees (FMT*), with faster anytime convergence towards the optimum, especially in high dimensions.
    Comment: 8 pages, 6 figures. Video available at http://www.youtube.com/watch?v=TQIoCC48gp
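    A heavily simplified skeleton of the batching idea, assuming a single admissible heuristic serves as both the edge-cost estimate and the cost-to-go estimate: each batch of samples defines an implicit RGG, candidate edges are processed in order of their estimated contribution to the solution cost g(v) + c_hat(v, x) + h_hat(x), and processing stops once no queued edge can improve the incumbent solution. Pruning, rewiring, and RGG radius selection are omitted; all names and defaults are assumptions, not the published BIT*.

```python
import heapq

def bit_star_skeleton(start, goal, sample_free, cost, heuristic,
                      collision_free, batch_size=100, batches=10):
    """Simplified sketch of BIT*-style batched search (see caveats above).
    sample_free() draws a collision-free state, cost(v, x) is the true edge
    cost, heuristic(a, b) is an admissible estimate, and collision_free(v, x)
    performs the expensive edge check."""
    g = {start: 0.0}                         # cost-to-come of connected vertices
    best_goal_cost = float("inf")
    samples = {goal}
    for _ in range(batches):
        samples |= {sample_free() for _ in range(batch_size)}
        # Queue candidate edges by estimated solution cost through the edge:
        # g(v) + c_hat(v, x) + h_hat(x).
        queue = []
        for v in list(g):
            for x in samples:
                if x in g:
                    continue
                est = g[v] + heuristic(v, x) + heuristic(x, goal)
                heapq.heappush(queue, (est, len(queue), v, x))
        while queue:
            est, _, v, x = heapq.heappop(queue)
            if est >= best_goal_cost:
                break                        # nothing left can improve the incumbent
            if not collision_free(v, x):
                continue
            new_cost = g[v] + cost(v, x)
            if new_cost < g.get(x, float("inf")):
                g[x] = new_cost
                if x == goal:
                    best_goal_cost = new_cost
                # A full implementation would enqueue x's outgoing edges here
                # and rewire existing connections through cheaper parents.
    return best_goal_cost
```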
    • …